Convergence Analysis of Deflected Conditional Approximate Subgradient Methods
Similar Resources
Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
In this paper, we study methods for generating approximate primal solutions as a by-product of subgradient methods applied to the Lagrangian dual of a primal convex (possibly nondifferentiable) constrained optimization problem. Our work is motivated by constrained primal problems with a favorable dual problem structure that leads to efficient implementation of dual subgradient methods, such as ...
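As a minimal illustration of the primal-averaging idea (a sketch on a toy instance, not the paper's algorithm or data): for minimizing 0.5‖x‖² subject to aᵀx ≥ 1, the Lagrangian minimizer x(μ) = μa is available in closed form, so projected dual subgradient ascent can carry a running average of the primal iterates as an approximate primal solution.

```python
import numpy as np

# Toy instance (an assumption, not from the paper): minimize 0.5*||x||^2
# subject to a^T x >= 1, i.e. g(x) = 1 - a^T x <= 0.
a = np.array([3.0, 4.0])

mu, step = 0.0, 0.01           # dual multiplier and a constant stepsize
x_avg = np.zeros_like(a)       # running average of primal minimizers

for k in range(1, 2001):
    x = mu * a                      # x(mu) = argmin_x L(x, mu), closed form
    g = 1.0 - a @ x                 # subgradient of the dual at mu
    mu = max(0.0, mu + step * g)    # projected dual ascent step
    x_avg += (x - x_avg) / k        # ergodic primal average

print(x_avg)            # approaches x* = a/||a||^2 = [0.12, 0.16]
print(1.0 - a @ x_avg)  # primal constraint residual shrinks toward 0
```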
Duality between subgradient and conditional gradient methods
Given a convex optimization problem and its dual, there are many possible first-order algorithms. In this paper, we show the equivalence between mirror descent algorithms and algorithms generalizing the conditional gradient method. This is done through convex duality and notably implies that for certain problems, such as supervised machine learning problems with nonsmooth losses or problems ...
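A hedged sketch of the conditional gradient (Frank-Wolfe) side of this duality, on an assumed toy problem (projecting a point onto the probability simplex) rather than anything from the paper: each step calls a linear minimization oracle, which on the simplex just picks the vertex with the smallest gradient coordinate.

```python
import numpy as np

# Toy objective (an assumption): f(x) = 0.5*||x - c||^2 over the probability
# simplex, so the solution is the Euclidean projection of c onto the simplex.
c = np.array([0.2, 0.5, 0.9])
x = np.array([1.0, 0.0, 0.0])           # start at a vertex of the simplex

for k in range(200):
    grad = x - c                         # gradient of f at the current x
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0             # linear minimization oracle: a vertex
    gamma = 2.0 / (k + 2.0)              # classical diminishing stepsize
    x = (1.0 - gamma) * x + gamma * s    # convex combination stays feasible

print(x)  # approaches the projection of c, here [0.0, 0.3, 0.7]
```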
Approximate Subgradient Methods over Lagrangian Relaxations on Networks
where A is the node-arc incidence m×n matrix, b is the production/demand m-vector, x are the flows on the arcs of the network represented by A, and x̄ are the capacity bounds imposed on the flows of each arc. The side constraints c(x) ≤ 0 are defined by c : ℝ^n → ℝ^r, with c = [c_1, ..., c_r], where each c_i(x) is either linear, or nonlinear and twice continuously differentiable on the feasible set...
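To make this structure concrete, here is a hedged sketch on an assumed two-arc toy instance (not the paper's model or code): only the side constraint c(x) ≤ 0 is dualized, while the network constraints Ax = b, 0 ≤ x ≤ x̄ stay in the subproblem, which is solved here by naive enumeration standing in for an efficient network-flow solver.

```python
import numpy as np

# Assumed two-arc toy network (not the paper's model): flow conservation
# x0 + x1 = 2, bounds 0 <= x <= 2, cost f(x) = x @ x, and one side
# constraint c(x) = x0 - x1 <= 0, dualized with multiplier mu >= 0.
grid = np.linspace(0.0, 2.0, 41)
network_set = [np.array([t, 2.0 - t]) for t in grid]   # Ax = b, 0 <= x <= 2

def dual_value(mu):
    # q(mu) = min { f(x) + mu * c(x) : x in network set }, here by
    # enumeration standing in for the network-flow subproblem solver
    return min((x @ x + mu * (x[0] - x[1]), tuple(x)) for x in network_set)

q, x = dual_value(0.5)
print(q, x)   # q(mu) is a lower bound on the side-constrained optimum
```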
Approximate Subgradient Methods for Lagrangian Relaxations on Networks
Nonlinear network flow problems with linear/nonlinear side constraints can be solved by means of Lagrangian relaxations. The dual problem is the maximization of a dual function whose value is estimated by approximately minimizing a Lagrangian function over the set defined by the network constraints. We study alternative stepsizes for the approximate subgradient methods used to solve the dual problem. S...
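A minimal sketch of such a stepsize comparison on an assumed one-dimensional dual q(μ) = μ − 0.5μ² (maximizer μ* = 1), with the inexact inner minimization mimicked by perturbing the exact subgradient; the noise model and both stepsize rules are illustrative assumptions, not the paper's choices.

```python
import numpy as np

# Assumed toy dual (not from the paper): q(mu) = mu - 0.5*mu^2, so the
# exact subgradient is 1 - mu; the added noise stands in for solving the
# Lagrangian subproblem only approximately (an epsilon-subgradient).
rng = np.random.default_rng(0)

def approx_subgradient(mu, eps=0.05):
    return (1.0 - mu) + eps * rng.uniform(-1.0, 1.0)

for rule in ("constant", "diminishing"):
    mu = 0.0
    for k in range(1, 501):
        step = 0.05 if rule == "constant" else 1.0 / k
        mu = max(0.0, mu + step * approx_subgradient(mu))  # projected ascent
    print(rule, mu)   # both land near mu* = 1, up to the epsilon error
```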
A Unifying Framework for Convergence Analysis of Approximate Newton Methods
Many machine learning models are reformulated as optimization problems, so it is important to solve large-scale optimization problems in big data applications. Recently, subsampled Newton methods have emerged and attracted much attention due to their per-iteration efficiency, rectifying the ordinary Newton method's weakness of suffering a high cost at each iteration whil...
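A hedged sketch of one subsampled Newton variant on ℓ₂-regularized logistic regression (the synthetic data, sample size s, and regularization weight are arbitrary assumptions): the gradient is computed exactly while the Hessian is estimated from a random subsample, which keeps the per-iteration cost low.

```python
import numpy as np

# Assumed synthetic instance: n points in d dimensions, labels in {-1, +1}.
rng = np.random.default_rng(0)
n, d, s, lam = 5000, 10, 200, 1e-2
A = rng.standard_normal((n, d))
y = (A @ rng.standard_normal(d) > 0).astype(float) * 2 - 1

def full_gradient(w):
    # exact gradient of the regularized logistic loss
    z = y * (A @ w)
    return -(A.T @ (y / (1.0 + np.exp(z)))) / n + lam * w

w = np.zeros(d)
for _ in range(20):
    idx = rng.choice(n, size=s, replace=False)     # Hessian subsample
    As = A[idx]
    p = 1.0 / (1.0 + np.exp(-(As @ w)))            # sigmoid on the sample
    H = (As.T * (p * (1 - p))) @ As / s + lam * np.eye(d)
    w -= np.linalg.solve(H, full_gradient(w))      # Newton step, sampled H

print(np.linalg.norm(full_gradient(w)))            # should be small
```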
Journal
Journal title: SIAM Journal on Optimization
Year: 2009
ISSN: 1052-6234, 1095-7189
DOI: 10.1137/080718814